Learning Maximum Likelihood Semi-Naive Bayesian Network Classifier
Authors
Abstract
In this paper, we propose a technique to construct a sub-optimal semi-naive Bayesian network when given a bound on the maximum number of variables that can be combined into a node. We show theoretically that our approach has a lower computational cost than the traditional semi-naive Bayesian network, while still yielding a sub-optimal structure under the maximum likelihood criterion. We conduct a series of experiments to evaluate our approach; the results show that it is encouraging and promising.

Keywords— Bayesian network, Semi-Naive, Bound, Integer programming.
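The semi-naive idea described in the abstract — merging several variables into one joint node, subject to a bound on how many may be combined — can be illustrated with a minimal sketch. This is a hypothetical Python illustration, not the paper's implementation: the grouping is assumed to be given (the paper's contribution is *selecting* it, via integer programming, to maximize likelihood under the bound), and the function names and Laplace smoothing are our own choices.

```python
from collections import defaultdict

def train_semi_naive_bayes(X, y, groups):
    """Estimate class priors and per-group multinomial counts.

    `groups` partitions the feature indices; each group's features are
    combined into a single joint node (the semi-naive idea). With a
    bound B, every group has at most B indices.
    """
    classes = sorted(set(y))
    priors = {c: sum(1 for t in y if t == c) / len(y) for c in classes}
    # counts[c][g][joint_value] = number of class-c samples with that value
    counts = {c: [defaultdict(int) for _ in groups] for c in classes}
    for xi, yi in zip(X, y):
        for g, idx in enumerate(groups):
            counts[yi][g][tuple(xi[i] for i in idx)] += 1
    return classes, priors, counts

def predict(x, classes, priors, counts, groups, alpha=1.0):
    """Pick the class maximizing prior times smoothed group likelihoods."""
    best, best_score = None, float("-inf")
    for c in classes:
        n_c = sum(counts[c][0].values()) or 1
        score = priors[c]
        for g, idx in enumerate(groups):
            v = tuple(x[i] for i in idx)
            k = len(counts[c][g]) + 1  # crude support size for smoothing
            score *= (counts[c][g].get(v, 0) + alpha) / (n_c + alpha * k)
        if score > best_score:
            best, best_score = c, score
    return best
```

On XOR-style data, a fully naive grouping `[(0,), (1,)]` cannot separate the classes, while combining the two features into one node, `[(0, 1)]`, classifies correctly — which is the motivation for merging variables in the first place.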
Similar Resources
On Supervised Learning of Bayesian Network Parameters
Bayesian network models are widely used for supervised prediction tasks such as classification. Usually the parameters of such models are determined using ‘unsupervised’ methods such as likelihood maximization, as it has not been clear how to find the parameters maximizing the supervised likelihood or posterior globally. In this paper we show how this supervised learning problem can be solved e...
When Discriminative Learning of Bayesian Network Parameters Is Easy
Bayesian network models are widely used for discriminative prediction tasks such as classification. Usually their parameters are determined using ‘unsupervised’ methods such as maximization of the joint likelihood. The reason is often that it is unclear how to find the parameters maximizing the conditional (supervised) likelihood. We show how the discriminative learning problem can be solved ef...
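The distinction these abstracts draw — maximizing the joint likelihood p(x, y) versus the conditional (supervised) likelihood p(y | x) — can be made concrete with a small sketch. This is an illustrative example of the two objectives for a naive Bayes model with given parameters, not code from any of the cited papers; the function name and parameter layout are our own.

```python
import math

def nb_joint_and_conditional_loglik(X, y, priors, cond):
    """Return (joint, conditional) log-likelihoods of a naive Bayes model.

    priors[c]      = p(c)
    cond[c][j][v]  = p(x_j = v | c)
    """
    joint_ll, cond_ll = 0.0, 0.0
    for xi, yi in zip(X, y):
        # p(x, c) for every class c, under the naive factorization
        pxc = {c: priors[c] * math.prod(cond[c][j][v] for j, v in enumerate(xi))
               for c in priors}
        joint_ll += math.log(pxc[yi])                     # log p(x, y)
        cond_ll += math.log(pxc[yi] / sum(pxc.values()))  # log p(y | x)
    return joint_ll, cond_ll
```

Since p(y | x) ≥ p(x, y) pointwise, the conditional log-likelihood is never smaller than the joint one; the harder question the cited papers address is which parameters maximize the conditional objective.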
Supervised Classification with Gaussian Networks. Filter and Wrapper Approaches
Bayesian network-based classifiers can only handle discrete variables: they assume variables are sampled from a multinomial distribution, yet most real-world domains involve continuous variables. A common practice for dealing with continuous variables is to discretize them, with a subsequent loss of information. The continuous classifiers presented in this paper are supported by the Ga...
Supervised Learning of Bayesian Network Parameters Made Easy
Bayesian network models are widely used for supervised prediction tasks such as classification. Usually the parameters of such models are determined using ‘unsupervised’ methods such as maximization of the joint likelihood. In many cases, the reason is that it is not clear how to find the parameters maximizing the supervised (conditional) likelihood. We show how the supervised learning problem ...
Calculating the Normalized Maximum Likelihood Distribution for Bayesian Forests
When learning Bayesian network structures from sample data, an important issue is how to evaluate the goodness of alternative network structures. Perhaps the most commonly used model (class) selection criterion is the marginal likelihood, which is obtained by integrating over a prior distribution for the model parameters. However, the problem of determining a reasonable prior for the parameters...